Co-Learning of Recursive Languages from Positive Data

Authors

  • Rusins Freivalds
  • Thomas Zeugmann
Abstract

The present paper deals with the co-learnability of enumerable families L of uniformly recursive languages from positive data. This refers to the following scenario. A family L of target languages as well as a hypothesis space for it are specified. The co-learner is eventually fed all positive examples of an unknown target language L chosen from L. The target language L is successfully co-learned if and only if the co-learner can definitely delete all but one of the possible hypotheses, and the remaining one has to correctly describe L. We investigate the capabilities of co-learning in dependence on the choice of the hypothesis space, and compare it to language learning in the limit from positive data. We distinguish between class preserving learning (L has to be co-learned with respect to some suitably chosen enumeration of all and only the languages from L), class comprising learning (L has to be co-learned with respect to some hypothesis space containing at least all the languages from L), and absolute co-learning (L has to be co-learned with respect to all class preserving hypothesis spaces for L). Our results are manifold. First, it is shown that co-learning is exactly as powerful as learning in the limit provided the hypothesis space is appropriately chosen. However, while learning in the limit is insensitive to the particular choice of the hypothesis space, the power of co-learning crucially depends on it. Therefore we study the properties a hypothesis space should have in order to be suitable for co-learning. Finally, we derive sufficient conditions for absolute co-learnability, and separate it from finite learning. The first author was supported by grant No. 93.599 from the Latvian Science Council.
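To make the scenario concrete, the following is a minimal sketch, in Python, of a co-learner over a toy finite hypothesis space. All names and the deletion rule (irrevocably discard every index whose language fails to cover the positive data seen so far) are illustrative assumptions; the paper itself treats infinite enumerable families and does not prescribe this particular strategy.

```python
# A minimal co-learning sketch over a toy, finite hypothesis space.
# Assumption: each hypothesis denotes a finite set of strings, so
# membership is trivially decidable (the paper's setting is uniformly
# recursive languages, where membership is decidable but the languages
# may be infinite).

def co_learn(hypotheses, positive_examples):
    """Irrevocably delete hypotheses contradicted by the positive data.

    Co-learning succeeds iff all but one index is definitely deleted
    and the single survivor describes the target language.
    """
    alive = set(hypotheses)       # indices not yet deleted
    seen = set()                  # positive examples observed so far
    for example in positive_examples:
        seen.add(example)
        # Deletion is definitive: a removed index never returns. This
        # irrevocability is what distinguishes co-learning from
        # learning in the limit, where mind changes are permitted.
        alive = {i for i in alive if seen <= hypotheses[i]}
        if len(alive) == 1:
            return alive.pop()    # the unique remaining hypothesis
    return None                   # data exhausted without success

# Toy run: the target language is the one with index 2.
toy_space = {0: {"a"}, 1: {"b"}, 2: {"a", "b"}}
print(co_learn(toy_space, ["a", "b"]))  # -> 2
```

Even this simple consistency-based deletion rule illustrates why the choice of hypothesis space matters: had the space also contained a proper superset of the target language, no amount of positive data could ever delete it.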


Similar Papers

Learning Indexed Families of Recursive Languages from Positive Data

In the past 40 years, research on inductive inference has developed along different lines, concerning different formalizations of learning models and in particular of target concepts for learning. One common root of many of these is Gold’s model of identification in the limit. This model has been studied for learning recursive functions, recursively enumerable languages, and recursive languages...


Mind Change Speed-up for Learning Languages from Positive Data

Within the frameworks of learning in the limit of indexed classes of recursive languages from positive data and automatic learning in the limit of indexed classes of regular languages (with automatically computable sets of indices), we study the problem of minimizing the maximum number of mind changes FM(n) by a learner M on all languages with indices not exceeding n. For inductive inference of...


Learning Recursive Languages with Bounded Mind Changes

In the present paper we study the learnability of enumerable families L of uniformly recursive languages in dependence on the number of allowed mind changes, i.e., with respect to a well-studied measure of efficiency. We distinguish between exact learnability (L has to be inferred w.r.t. L) and class preserving learning (L has to be inferred w.r.t. some suitably chosen enumeration of all the lang...
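As a hedged illustration of this efficiency measure, the sketch below runs a toy consistent limit learner (always conjecturing the least index covering the data, an assumed strategy rather than this paper's construction) and counts its mind changes.

```python
# Counting mind changes of a toy limit learner; the minimal-index
# strategy and the finite hypothesis space are illustrative assumptions.

def mind_changes(hypotheses, positive_examples):
    """Return the final conjecture and the number of mind changes.

    A mind change occurs whenever the learner replaces its current
    conjecture by a different index; bounding this count is the
    efficiency measure referred to above.
    """
    seen, current, changes = set(), None, 0
    for example in positive_examples:
        seen.add(example)
        # Conjecture the least index whose language covers the data
        # (assumes the data stems from some language in the family).
        guess = min(i for i in hypotheses if seen <= hypotheses[i])
        if current is not None and guess != current:
            changes += 1
        current = guess
    return current, changes

toy_space = {0: {"a"}, 1: {"b"}, 2: {"a", "b"}}
print(mind_changes(toy_space, ["a", "b", "a"]))  # -> (2, 1)
```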


Learning indexed families of recursive languages from positive data: A survey

In the past 40 years, research on inductive inference has developed along different lines, e.g., in the formalizations used, and in the classes of target concepts considered. One common root of many of these formalizations is Gold’s model of identification in the limit. This model has been studied for learning recursive functions, recursively enumerable languages, and recursive languages, refle...


Trading Monotonicity Demands versus Efficiency

The present paper deals with the learnability of indexed families L of uniformly recursive languages from positive data. We consider the influence of three monotonicity demands and their dual counterparts on the efficiency of the learning process. The efficiency of learning is measured in dependence on the number of mind changes a learning algorithm is allowed to perform. The three notions of (dual)...



Journal:

Volume:   Issue:

Pages:  -

Publication date: 1996